Do I believe it?
Hmmm. The results presented deal with a discrete problem. I'm not sure whether they apply to continuous systems as well. The whole notion of NP-completeness and combinatorial complexity deals with discrete systems. For continuous systems, information-based complexity (IBC) might be a better way of analyzing the problem. Note that the "fruit-fly" of IBC research is multidimensional integration.
However, the distinction between continuous and discrete might not be important. For instance, in global optimization, finding the locations of the local minima is "easy", but then one is left picking among many different discrete local minima to find the global minimum (which makes it a potentially "hard" combinatorial problem - assuming the number of local minima grows exponentially with problem size).
I'm still perplexed as to the origin of the difficulty of the FSP (actually, I don't understand the origin of the difficulty for most NP problems). So a quick look into the form of the integrals is in order.
We start with wanting the integral I = int f(x) dx, where x is really a vector of high dimension. Monte Carlo fits the bill nicely, having error bounds independent of dimension. But f(x) is strongly peaked - only small volumes contribute most of the integral. To be efficient, we need importance sampling. The usual trick is to split f(x) into a product: p(x)g(x), where p(x) >= 0. Since p is positive, we can interpret it as a probability and use various sampling methods to get int p(x) g(x) dx / int p(x) dx.
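To keep the bookkeeping concrete, here is a minimal importance-sampling sketch with purely illustrative choices (a Gaussian-like weight p(x) = exp(-x.x) standing in for psi^2, g(x) = |x|^2 in d = 10 dimensions, Metropolis sampling of the unnormalized p); the weighted average has the known value d/2 to check against.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10                                  # dimension (illustrative)

def p(x):
    """Unnormalized, strictly positive weight (stand-in for psi^2)."""
    return np.exp(-np.dot(x, x))

def g(x):
    """Quantity whose weighted average <g>_p we want."""
    return np.dot(x, x)

def metropolis(n_samples, step=0.5):
    """Sample p by Metropolis; the normalization int p dx never appears,
    which is exactly why the ratio  int p g dx / int p dx  is the target."""
    x = np.zeros(d)
    vals = []
    for _ in range(n_samples):
        x_new = x + step * rng.standard_normal(d)
        if rng.random() < p(x_new) / p(x):   # accept with prob min(1, p_new/p_old)
            x = x_new
        vals.append(g(x))
    return np.array(vals)

vals = metropolis(200_000)
print("estimate of <g>_p:", vals.mean())     # exact answer for this toy p is d/2 = 5.0
```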
We can get away with this in QMC by assuming that p(x) = psi^2(x), which is positive. But in Diffusion Monte Carlo, p(x) = psi(x) phi(x), where psi(x) is a guess and phi(x) is the exact ground state wavefunction. Now psi and phi will not have zeros at exactly the same location, so p(x) will have some negative regions.
The trick for dealing with a negative p(x) is to split it once again, into sgn(p) abs(p); we sample abs(p) and move sgn(p) over with g(x). The integral now looks like
[int sgn(p) g(x) abs(p(x)) dx / int abs(p(x)) dx] / [int sgn(p) abs(p(x)) dx / int abs(p(x)) dx]
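As a sanity check on the structure of this estimator, here is a toy sketch (my own construction, not from the paper): a one-dimensional p(x) = (1 - c x^2) exp(-x^2/2) that goes negative in the tails, g(x) = x^2, Metropolis sampling of abs(p), and the sign carried along in both numerator and denominator. For c = 0.3 the exact ratio is (1 - 3c)/(1 - c) ~ 0.143.

```python
import numpy as np

rng = np.random.default_rng(1)
c = 0.3          # controls where p(x) goes negative (toy model)

def p(x):
    """A weight that is not everywhere positive, standing in for psi*phi."""
    return (1.0 - c * x * x) * np.exp(-0.5 * x * x)

def g(x):
    return x * x

def signed_estimate(n_samples, step=1.0):
    """Metropolis sampling of abs(p); sgn(p) rides along in the averages."""
    x = 0.0
    num, den = [], []
    for _ in range(n_samples):
        x_new = x + step * rng.standard_normal()
        if rng.random() < abs(p(x_new)) / abs(p(x)):
            x = x_new
        s = np.sign(p(x))
        num.append(s * g(x))
        den.append(s)
    num, den = np.array(num), np.array(den)
    return num.mean() / den.mean(), den.mean()

ratio, avg_sign = signed_estimate(200_000)
print("signed ratio :", ratio)      # should approach (1 - 3c)/(1 - c) ~ 0.143
print("average sign :", avg_sign)   # the denominator <sgn(p)>_abs(p)
```

In this toy the average sign stays comfortably away from zero; the whole question below is why, for fermions, it doesn't.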
Here's where I get lost. One source of the problem could be the denominator - since it is the average of a signed quantity, it could be small (at least relative to its errors) and magnify errors in the numerator. But the paper says that both the numerator and denominator have errors that grow exponentially in system size. So the denominator is an irritant, but not the root cause.
The error of the average sign (again from the paper) is related to the exponential of the free energy difference between the fermionic problem (with p(x)) and the bosonic problem (with abs(p(x))). The cancellation induced by the minus signs necessarily makes the fermionic partition function smaller than the bosonic one, and hence the fermionic free energy larger. But I still don't see how this makes the problem hard.
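Spelling that out the way I understand the paper's argument (f_F and f_B are the free energy densities of the fermionic and bosonic problems, N the system size, beta the inverse temperature, M the number of samples):

```latex
\langle \mathrm{sgn} \rangle
  = \frac{\int p(x)\,dx}{\int |p(x)|\,dx}
  = \frac{Z_F}{Z_B}
  = e^{-\beta N (f_F - f_B)},
\qquad
\frac{\Delta\langle \mathrm{sgn} \rangle}{\langle \mathrm{sgn} \rangle}
  \sim \frac{1}{\sqrt{M}\,\langle \mathrm{sgn} \rangle}
  = \frac{e^{\beta N (f_F - f_B)}}{\sqrt{M}} .
```

Holding the relative error fixed then forces M to grow like e^{2 beta N (f_F - f_B)}, exponentially in system size.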
Why not add a sufficiently large constant to p to make it positive? Now there are no minus signs anywhere. But the essential features of the integrand haven't changed - so it can't be a "sign" problem. I guess I view the complexity of integration as coming from the unpredictable fluctuations of the integrand - wildly fluctuating integrands are "harder" to do than smooth integrands. Adding a constant doesn't change this.
Perhaps the antisymmetry requirements produce a function that has nearly equal positive and negative parts. Each one separately has a small relative error, but the nearly exact cancellation causes great magnification of the error (a well-known numerical analysis bugaboo). Thus the thought experiment of adding a constant doesn't change anything - the constant is added equally to both parts and cancels exactly.
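A toy numerical check of that intuition (made-up numbers; only the cancellation matters): two averages, each known to a tiny relative error, whose difference is known far less precisely - and shifting both by the same constant leaves the difference, and its error, untouched.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 100_000

# Two toy "positive part" and "negative part" estimates that nearly cancel.
pos = 1000.0 + rng.standard_normal(M)   # mean 1000, noise O(1)
neg =  999.0 + rng.standard_normal(M)   # mean  999, noise O(1)

A, B = pos.mean(), neg.mean()
err_A = pos.std(ddof=1) / np.sqrt(M)            # standard error of each average
err_B = neg.std(ddof=1) / np.sqrt(M)
err_diff = np.sqrt(err_A**2 + err_B**2)         # error of the difference

print("relative error of A     :", err_A / A)           # ~3e-6
print("relative error of A - B :", err_diff / (A - B))  # ~4e-3, orders of magnitude worse
# Adding the same constant to pos and neg changes A and B but not A - B,
# so the blow-up in the relative error of the difference is unchanged.
```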